6 December 2024
AI, Media and Democracy Lab: Why Bother with AI Transparency?
Key Concerns
Participants voiced a range of concerns about AI in journalism.
One participant summed it up: “If you can’t tell what’s real or fake, it could lead to distrust in everything we read.”
The Desire for Disclosure
Despite these concerns, participants strongly favored transparency, expressing a clear desire for AI-generated content to be distinctly labeled.
Recommendations for News Organizations
From these findings, three key takeaways emerged:
- Transparency is essential: Clearly label AI-generated content to address public concerns.
- Tailored disclosures: Different audiences may need varying levels of detail about AI involvement.
- Build digital literacy: Combine transparency with education to empower readers in navigating AI content.
While transparency alone won’t solve all trust issues in journalism, it’s a critical step. As one participant put it: “I don’t mind AI-written articles, but I want to know. Just be clear.” As AI adoption continues to grow, news organizations must balance innovation with accountability, ensuring audiences remain informed and engaged.
This research reflects an important step toward understanding how transparency can rebuild trust in AI-driven journalism.
Similar news items

28 April 2025
AI020 Conference Brings Together 400+ AI Experts in Amsterdam

27 April 2025
Watchdog raises alarm: act now to stop your Instagram photos from being used for AI

27 April 2025
Amsterdam struggles with future of data centers and digital ambitions